58 research outputs found

    Andrew Symington, Senior Horn Recital


    Andrew Symington, Junior Horn Recital


    Andrew Symington, Freshman Horn Recital


    Cameron Swett, Trumpet, and Andrew Symington, French Horn, Sophomore Recital


    A Hardware Testbed for Measuring IEEE 802.11g DCF Performance

    The Distributed Coordination Function (DCF) is the oldest and most widely used IEEE 802.11 contention-based channel access control protocol. DCF adds a significant amount of overhead in the form of preambles, frame headers, randomised binary exponential back-off and inter-frame spaces. Having accurate and verified performance models for DCF is thus integral to understanding the performance of IEEE 802.11 as a whole. In this work, DCF performance is measured subject to two different workload models using an IEEE 802.11g test bed.

    Bianchi proposed the first accurate analytic model for measuring the performance of DCF. The model calculates normalised aggregate throughput as a function of the number of stations contending for channel access. The model also makes a number of assumptions about the system, including saturation conditions (all stations have a fixed-length packet to send at all times), full connectivity between stations, constant collision probability and perfect channel conditions. Many authors have extended Bianchi's machine model to correct certain inconsistencies with the standard, while very few have considered alternative workload models. Owing to the complexities associated with prototyping, most models are verified against simulations and not experimentally using a test bed.

    In addition to a saturation model, we considered a more realistic workload model representing wireless Internet traffic. Producing a stochastic model for such a workload was a challenging task, as usage patterns change significantly between users and over time. We implemented and compared two Markovian Arrival Processes (MAPs) for packet arrivals at each client: a Discrete-time Batch Markovian Arrival Process (D-BMAP) and a modified Hierarchical Markov Modulated Poisson Process (H-MMPP). Both models had parameters drawn from the same wireless trace data. It was found that, while the latter model exhibited better long-range dependence at the network level, the former represented the traces more accurately at the client level, which made it more appropriate for the test bed experiments.

    A nine-station IEEE 802.11 test bed was constructed to measure the real-world performance of the DCF protocol experimentally. The stations used IEEE 802.11g cards based on the Atheros AR5212 chipset and ran a custom Linux distribution. The test bed was moved to a remote location where there was no measured risk of interference from neighbouring radio transmitters in the same band. The DCF machine model was fixed and normalised aggregate throughput was measured for one through to eight contending stations, subject to (i) saturation with a fixed packet length of 1000 bytes, and (ii) the D-BMAP workload model for wireless Internet traffic. Control messages were forwarded on a separate wired backbone network so that they did not interfere with the experiments.

    Analytic solver software was written to calculate numerical solutions for three popular analytic models of DCF, and the solutions were compared to the saturation test bed experiments. Although the normalised aggregate throughput trends were the same, it was found that as the number of contending stations increased, the measured aggregate DCF performance diverged from all three analytic models' predictions; for every station added to the network, the measured normalised aggregate throughput fell further below the analytic predictions. We conclude that some property of the test bed was not captured by the simulation software used to verify the analytic models.
    The D-BMAP experiments yielded a significantly lower normalised aggregate throughput than the saturation experiments, a clear result of channel underutilisation. Although this is a simple result, it highlights the importance of the traffic model to network performance. Normalised aggregate throughput appeared to scale more linearly compared with the RTS/CTS access mechanism, but no firm conclusion could be drawn at 95% confidence. We further conclude that, although normalised aggregate throughput is appropriate for describing overall channel utilisation in the steady state, jitter, response time and error rate are more important performance metrics in the case of bursty traffic.
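    As context for the saturation results above, the normalised aggregate throughput curve of Bianchi's model can be reproduced with a short fixed-point solver. The sketch below is an illustration only, not the solver software written for this work; the backoff window W, retry stages m and the slot, success, collision and payload durations are assumed placeholder values rather than parameters reported in the text.

        # Sketch of a Bianchi-style fixed-point solver for DCF saturation throughput.
        # W, m and all timing constants below are illustrative assumptions.
        def bianchi_throughput(n, W=16, m=6,
                               slot=9e-6,          # assumed duration of an empty slot (s)
                               t_success=320e-6,   # assumed channel time of a successful exchange (s)
                               t_collision=300e-6, # assumed channel time of a collision (s)
                               t_payload=148e-6):  # assumed airtime of a 1000-byte payload (s)
            """Normalised aggregate throughput for n saturated stations."""
            tau, p = 0.1, 0.1
            for _ in range(10_000):                # damped fixed-point iteration on (tau, p)
                tau_new = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
                p_new = 1 - (1 - tau_new) ** (n - 1)
                tau = 0.5 * tau + 0.5 * tau_new
                p = 0.5 * p + 0.5 * p_new
            p_tr = 1 - (1 - tau) ** n                    # P(some station transmits in a slot)
            p_s = n * tau * (1 - tau) ** (n - 1) / p_tr  # P(that transmission succeeds)
            busy = p_tr * p_s * t_success + p_tr * (1 - p_s) * t_collision
            return p_s * p_tr * t_payload / ((1 - p_tr) * slot + busy)

        for n in range(1, 9):   # one to eight contending stations, as in the experiments
            print(n, round(bianchi_throughput(n), 3))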

    A Hardware Test Bed for Measuring IEEE 802.11g DCF Performance

    The Distributed Coordination Function is one of three channel access control protocols specified by the IEEE 802.11 standard. In this paper we present a method of measuring DCF performance using a test bed built with off-the-shelf hardware. Performance is measured by normalized aggregate throughput as a function of the number of stations contending for channel access. We present measurements for both basic access and RTS/CTS access in fully-connected IEEE 802.11g networks under saturation conditions. We compare our measurements to results from three analytic models and a simulator, all of which share the same assumptions about the workload model and the operation of DCF. For small networks the analytic models predict much lower performance than is shown by simulation and test bed experiments. As the network grows, the measured performance deteriorates significantly faster than the analytic models predict. We attribute this to inaccuracies in the analytic models, imperfect channels and queuing. The simulation results fit the measured data more accurately, as the simulator makes fewer restrictive assumptions about DCF than the analytic models do. This is the first paper to provide a cross-comparison of test bed, simulation and analytic results for IEEE 802.11g DCF performance.
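    The performance deterioration with network size reported above is largely a collision effect, which a toy slot-level simulation of the contention process makes visible. The sketch below is a deliberate abstraction of DCF (no frame timing, no retry limit), not the simulator used in the paper, and the CW_MIN/CW_MAX values are assumptions.

        import random

        # Toy abstraction of DCF contention: every saturated station draws a backoff
        # from its current window, the smallest counter transmits, ties collide and
        # double their window (binary exponential back-off). CW_MIN/CW_MAX are assumed.
        CW_MIN, CW_MAX = 16, 1024

        def collision_probability(n, rounds=200_000, seed=1):
            rng = random.Random(seed)
            cw = [CW_MIN] * n
            backoff = [rng.randrange(cw[i]) for i in range(n)]
            collided = attempted = 0
            for _ in range(rounds):
                low = min(backoff)
                winners = [i for i, b in enumerate(backoff) if b == low]
                attempted += len(winners)
                if len(winners) > 1:
                    collided += len(winners)
                for i in range(n):
                    if backoff[i] == low:          # this station transmitted
                        cw[i] = min(cw[i] * 2, CW_MAX) if len(winners) > 1 else CW_MIN
                        backoff[i] = rng.randrange(cw[i])
                    else:                          # other stations keep counting down
                        backoff[i] -= low
            return collided / attempted

        for n in range(1, 9):
            print(n, round(collision_probability(n), 3))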

    Astrobee Robot Software: A Modern Software System for Space

    Astrobee is a new free-flyer robot designed to operate inside the International Space Station (ISS). Astrobee's capabilities include markerless navigation, autonomous docking for recharging, perching on handrails to minimize power consumption, and modular payloads. Astrobee will operate without crew support, controlled by teleoperation, plan execution, or on-board third-party software. This paper presents the Astrobee Robot Software, a NASA open-source project powering the Astrobee robot. The Astrobee Robot Software relies on a distributed architecture based on the Robot Operating System (ROS). The software runs on three interconnected smartphone-class processors. We present the software approach, the required infrastructure, and the main software components. The Astrobee Robot Software embraces modern software practices while respecting flight constraints. The paper concludes with lessons learned, including examples of how the software is used. Several research teams are already using the Astrobee Robot Software to develop novel projects that will fly on Astrobee.
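    For readers unfamiliar with ROS, the distributed architecture mentioned above is built on its publish/subscribe messaging. The minimal ROS 1 node below is a generic, hypothetical example of that pattern; the node name, topic and message content are invented and are not Astrobee interfaces.

        #!/usr/bin/env python
        # Generic ROS 1 publisher illustrating the publish/subscribe style that the
        # Astrobee software builds on; the topic and payload here are hypothetical.
        import rospy
        from std_msgs.msg import String

        def main():
            rospy.init_node("example_status_publisher")
            pub = rospy.Publisher("status", String, queue_size=10)
            rate = rospy.Rate(1)                   # publish once per second
            while not rospy.is_shutdown():
                pub.publish(String(data="heartbeat"))
                rate.sleep()

        if __name__ == "__main__":
            main()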

    Correspondence to General William Robertson Boggs, 1870s: January 21, 1875 - November 6, 1878

    Boggs Papers, Box 1, Folder 2

    Furthering the Application of Machine Learning to the Prediction of Oceanic Plankton Biomass

    The Plankton Prediction System (PPS) is a joint project between the Computer Science and Zoology departments of the University of Cape Town. Its purpose is to research and develop machine-learning software capable of predicting the level and distribution of sub-surface oceanic chlorophyll, given related data. In so doing, the PPS provides marine biologists with valuable information that would otherwise be both time-consuming and expensive to retrieve. The work outlined in this paper furthers earlier research [9] by Fenn, Curtis and Oberholzer and, as well as addressing a few shortcomings, expands upon a number of topics that demanded closer investigation. The following five items were chosen by the 2006 project team as core research areas:

    1. The production of a more structured and coherent set of data from which to perform predictions.
    2. The effect of various clustering algorithms on depth profile data.
    3. The use of a dynamic Bayesian network to incorporate the effect of time on chlorophyll predictions.
    4. The use of topic maps as a means to dynamically display the relationship between data.
    5. A greater degree of accompanying documentation and modular design.

    It is best to think of the work outlined in this paper as three stages in a pipeline. The first stage, preprocessing, is responsible for the integration of all the raw data from a number of different sources. After integration, the data is further discretized through a clustering process, which reduces its complexity. The second stage, prediction, is responsible for training a Dynamic Bayesian Network (DBN) with the clustered data produced in the preprocessing stage. Once training is complete, absent sub-surface chlorophyll data is inferred from the resultant network. The final stage in the PPS pipeline is the visualization of the results obtained from both the preprocessing and prediction stages. Technologies such as Topic Maps and hypergraphs are used to create a dynamic view of the relationships between the data. Moreover, inference results are rendered as colour rasters for viewing within the web-based PPS interface.
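    As a hypothetical illustration of the clustering step in the preprocessing stage (the array shapes, the choice of k-means and the number of clusters are assumptions made here, not choices reported by the project), discretising depth profiles before training the Bayesian network could look like this:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical sketch of the preprocessing step: reduce each depth profile
        # (one row = chlorophyll values sampled at fixed depths) to a discrete label.
        rng = np.random.default_rng(0)
        profiles = rng.random((500, 20))           # 500 synthetic profiles, 20 depth bins

        kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
        labels = kmeans.fit_predict(profiles)      # one symbol per profile for the DBN

        print(labels[:10])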